Seventh Framework Programme

sensor-independent features. Different physical quantities can provide the same abstract information about an activity. For example, on-body inertial sensors, clothing-integrated textile elongation sensors, and visual tracking all give information about body-part trajectories. If the classifiers are trained on such trajectories rather than on raw sensor signals, the classification system will easily tolerate changes in sensor modality. With respect to such abstract features we intend to:
• Define abstract feature sets for the most common activity recognition problems
• Show how such features can be computed from different sensor configurations. In particular, demonstrate how dynamic changes in the sensor configuration can be handled. We will apply and adapt different variability-tolerant signal conditioning methods to the computation of the abstract features (see previous sub-objective)
• Show how differences in the level of detail, reliability and accuracy that different types of sensors provide for a given abstract feature can be handled. Consistent methods are needed to specify how such differences propagate to the features and how the following stages of the recognition chain can be made aware of such changes.
Success Criteria. Demonstration of several (at least 8) types of abstract features, and of the fact that they can be computed using different sensor combinations. The recognition accuracy of the system using the abstract features should not vary by more than a few percent when different sensing modalities are used. Initially we will work with abstract features related to body motion, location, and interaction with devices.
Objective #4: Machine learning algorithms optimized for opportunistic networks
Opportunistic classifiers: we will use machine learning techniques to develop improved classification algorithms for activity recognition.
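As a toy illustration of the sensor independence behind the abstract features of Objective #3 (the sample rate, swing frequency, and signals below are all hypothetical), the same arm-swing frequency feature is recovered here from a position tracker and from an inertial sensor observing one and the same motion:

```python
import math

FS = 50.0       # sample rate in Hz (hypothetical)
F_SWING = 1.25  # true arm-swing frequency in Hz (hypothetical)

def zero_cross_freq(signal, fs):
    """Abstract feature: dominant oscillation frequency, estimated from
    mean-crossings of the signal (two crossings per period)."""
    mean = sum(signal) / len(signal)
    centered = [s - mean for s in signal]
    crossings = sum(
        1 for a, b in zip(centered, centered[1:]) if a <= 0 < b or a >= 0 > b
    )
    return crossings / (2.0 * (len(signal) / fs))

# Modality 1: a position tracker reports the wrist trajectory directly.
position = [math.sin(2 * math.pi * F_SWING * n / FS) for n in range(500)]

# Modality 2: an inertial sensor reports the acceleration of the same
# motion (second derivative of the position; the scale factor is
# irrelevant for this abstract feature).
accel = [-(2 * math.pi * F_SWING) ** 2 * p for p in position]

f_pos = zero_cross_freq(position, FS)
f_acc = zero_cross_freq(accel, FS)
```

Because both modalities map onto the same abstract quantity, a classifier trained on the swing-frequency feature is indifferent to which of the two sensors happens to be available.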
In order to be suitable for dynamically changing sensor networks, OPPORTUNITY algorithms should exhibit the following properties:
• Graceful performance degradation with respect to changes in the quality of the input signal
• Provide a measure of the reliability of their decisions, taking into account the (estimated or reported) uncertainty of available inputs
• Allow for fast training and online adaptation, incorporating supervisory information provided either by the user or by an external system (cf. WP3 on dynamic adaptation and autonomous evolution)
• Achieve signal segmentation and classification respecting application-specific constraints of pervasive and wearable computing (e.g. real-time operation, computational cost)
OPPORTUNITY (225938) Annex 1 Version 9 (26/09/2008) Approved by EC on (15/10/2008) Page 11 of 141
Success Criteria. The success criterion is the comparison of the opportunistic classifiers to state-of-the-art dedicated classifiers on a set of realistic problems. We aim at a recognition rate comparable to that of the dedicated classifiers (not more than 10% to 20% below). We will perform a systematic evaluation of the developed classifiers with respect to sensitivity to signal noise, training requirements, and suitability for online implementation.
Opportunistic Classifier Fusion: Develop and adapt classifier fusion methods able to cope with changes in the availability, type, and characteristics of their input classifiers/sensors. To this end we will:
• Make a comparative assessment of classifier fusion methods with emphasis on the specific characteristics of opportunistic sensor setups, such as scalability and robustness
• Develop methods for dynamic selection and fusion of sensing modalities with respect to application-defined requirements
• Develop fault-recovery mechanisms that add or remove input channels based on the reliability of the available sensors
Success Criteria.
Again, a comparison of our system against dedicated recognition systems will be made, aiming for not more than a 10% to 20% performance difference. Moreover, opportunistic decisions based on classifier fusion are expected to outperform dedicated systems in case of sensor failure or sensor network reorganization. Opportunistic fusion will be evaluated in terms of performance degradation and fault-recovery in cases of sensor noise and sensor failure, as well as its ability to perform dynamic input selection based on the reliability of available sensors.
Objective #5: Unsupervised dynamic adaptation
System modelling of context recognition systems: we will develop, based on information-theoretical models and empirical approaches:
• Models linking system configuration to multiparametric performance metrics, focusing on the specific properties of opportunistic systems
• Methods to quantify the benefit of including specific additional sensors and features, based on the information provided in the sensor meta-description as well as runtime evaluation of each channel's information content
Dynamic adaptation of context recognition systems: we will develop dynamic adaptation methods to cope with rapid changes in sensor configurations (e.g. changes in desired performance, or re-occurring changes in the number of Self-* sensors). To this end we will:
• Develop heuristics, based on system models, for the optimal dynamic adaptation of an opportunistic system in a given situation. The adaptivity dimensions are defined by the system performance models and include, as a minimum, the linkage between sensor number and performance goal.
Success Criteria. The success criterion will be the ability of our models to predict the performance gains of our system when using various sensor combinations. We aim for the models to be accurate to within 10%.
Objective #6: Autonomous evolution
Runtime supervision.
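In its simplest form, the reliability-weighted fusion with dynamic input selection described under Opportunistic Classifier Fusion might look like the following sketch (the channel names, weights, and dropout threshold are invented for illustration):

```python
def fuse(predictions, reliabilities, min_reliability=0.2):
    """Reliability-weighted average over the currently usable classifiers.

    predictions   -- {channel: [p_class0, p_class1, ...]}
    reliabilities -- {channel: estimated or self-reported weight in [0, 1]}
    Channels below min_reliability (e.g. failed sensors) are dropped.
    """
    active = {c: w for c, w in reliabilities.items() if w >= min_reliability}
    total = sum(active.values())          # assumes at least one usable channel
    n_classes = len(next(iter(predictions.values())))
    fused = [0.0] * n_classes
    for chan, weight in active.items():
        for k, p in enumerate(predictions[chan]):
            fused[k] += (weight / total) * p
    return fused.index(max(fused)), fused

predictions = {
    "wrist_accel": [0.7, 0.3],
    "hip_accel":   [0.6, 0.4],
    "microphone":  [0.1, 0.9],   # degraded channel, pulling the wrong way
}
reliabilities = {"wrist_accel": 0.9, "hip_accel": 0.8, "microphone": 0.05}

label, fused = fuse(predictions, reliabilities)  # microphone is dropped
```

Dropping the unreliable channel rather than averaging it in is what keeps the fused decision stable when a sensor fails or degrades.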
We will develop methods to monitor the performance of the activity recognition system with respect to long-term changes in sensor configurations. These methods will:
• Provide a confidence assessment of classifier outputs w.r.t. possible signal degradation (due to e.g. sensor degradation, slow changes in placement/orientation, or long-term changes in the user's action-to-limb-trajectory mapping) in order to trigger a system retraining
• Provide an indication of the correlation between sensors (at the signal, feature, and class output level) in order to support self-supervised learning
• Investigate the use of error-related EEG correlates (brain signal patterns occurring when a system deviates from expected behavior) as an endogenous, automatically detected measure of system performance
Autonomous evolution. We will develop methods for the long-term gradual adaptation of the system to a new sensor configuration. These methods are:
• Self-supervised learning techniques to train classifiers of sensor devices (not yet capable of Self-*) entering the system
• Self-supervised learning techniques to re-train classifiers of sensors when long-term sensor degradation is observed
• Performance metrics characterizing online adaptation. This includes traditional machine learning performance metrics (precision/recall, ROC curves) and novel metrics suited to autonomous evolution that will indicate adaptation speed, system robustness and stability, and the evolution of activity class signal templates and attractors
Interactive minimally supervised adaptation: In some cases it may be more valuable to rely on interactive user feedback to supervise system adaptation. These methods will:
• Evaluate the gain obtained by one-time interactive supervision w.r.t. self-supervised learning, on the basis of confidence values and the information content of the system parameters and sensors
• Decide when user input shall be queried, to minimize user disturbance while maximizing information gain
• Rely on error-related EEG correlates, including them as self-supervisory feedback to support autonomous evolution
Success Criteria. The success of this objective will be assessed experimentally by quantifying the improvement brought about by autonomous evolution. Performance will be compared to a trained baseline system not capable of dynamic adaptation while sensor variations are introduced. Variations include: sensor addition, sensor removal, long-term slow (w.r.t. activity occurrence dynamics) changes in sensor orientation and placement (with body-worn acceleration sensors in a first step), and slow (w.r.t. activity occurrence dynamics) addition of progressively higher signal noise. Performance will be characterized along the metrics introduced above. We aim at achieving sustained performance within the range of the systems' adaptation capabilities. The range of these capabilities will be characterized with respect to the tradeoffs intrinsic to autonomous evolution (e.g. faster adaptation speed vs. stability, template evolution vs. attractor strength). A success criterion is to characterize the level at which autonomous evolution can proceed without user interaction and with which tradeoffs, as well as to characterize the benefits of interactive user feedback and EEG-based feedback.
Objective #7: Empirical validation
A three-stage empirical validation procedure is followed, starting from simple synthetic activities up to complex recognition scenarios typical of real-world applications. This minimizes risks by ensuring that basic goals are fulfilled, while not limiting the scope of the project.
• Stage 1: the methods will be demonstrated on simple activities with a limited number of sensors (1-3). Adaptivity will be demonstrated while recognizing at least 3 modes of locomotion, 10 postures, 10 typical hand gestures, and presence/location.
Variations will include rotation of body-worn sensors, changes in on-body placement, addition of noise, and addition/removal of sensors.
• Stage 2: the methods will be demonstrated on composite activities that involve a larger number of sensors and more variability, including object manipulation, device use, and social interactions and cooperation between humans (coordinated physical activities).
• Stage 3: the methods will be demonstrated in complex scenarios involving real-world gestures and a large number of sensors (the activities stem from the fields of indoor activity monitoring and health- and wellness-oriented lifestyle monitoring). The sensor set will encompass between 10 and 20 sensors of the most common types, such as body-mounted motion sensors, microphones, location information, information on device activity, and object usage and motion (the exact setup will be determined in the project). For each scenario we will consider at least 10 different sensor configurations.
• Opportunistic BCI validation: the last case study comes from the field of mental activity recognition using Brain-Computer Interfaces. In this scenario non-invasive electroencephalography (EEG) signals will be used to identify the user's mental states, such as error detection, anticipation, and imagined movements. Building on previous research endeavours at EPFL, sensor configurations of 32 and 64 (homogeneous) electrodes will be used to capture the brain's electrical activity. Performance changes in both existing and OPPORTUNITY approaches will be assessed with respect to changes in the number of available reliable sensors, as well as changes in the incoming EEG signal.
Success Criteria. The ultimate success criterion will be the empirical comparison of our system to state-of-the-art traditional (non-opportunistic) activity recognition systems.
To this end we will train our system on a large, fixed set of typical sensors. We will then dynamically change the sensor configuration. Using classical recognition methods, a new system needs to be designed and trained for each such configuration. In contrast, our system will be expected to adapt automatically to the new configuration. For each configuration we will then compare the performance of our opportunistic system to that of a state-of-the-art system specifically designed and trained for this configuration. On average we aim to achieve about 80% of the recognition rate of the dedicated system when the sensor configuration is not changed. However, we expect the opportunistic system to outperform the dedicated system when the sensor configuration is changed.
Objective #8: Scientific dissemination
Scientific dissemination is a key objective of OPPORTUNITY. It includes the highest level of scientific publication, but also other means of bringing the methods developed by OPPORTUNITY into the community, such as tutorials and demos at key conferences, and making software packages publicly available under the GPL licence.
Success Criteria. By month 36, at least 12 journal papers will be published dedicated solely to the methods developed within the project. We foresee at least 2 journal papers per partner focusing on its specific domain within the project, and 2 additional papers summarizing the overall project contributions. We will publish a book about opportunistic activity and context recognition systems, to disseminate the knowledge acquired during the project to the highest scientific standards. In addition we will have at least two publications per partner per year in the top conferences of the respective field (top defined as acceptance rates below 30%). Finally, over the course of the project, 3 software packages will be released under the GPL, and one interdisciplinary retreat and one technical workshop will be held.
The metrics to measure the effect of our community building efforts include:
• Number of people who have signed up to the newsletter
• Number of attendees at the technical workshop and retreat
• Follow-up of the technical workshop and retreat (contacts, requests for information, joint projects)
• Number of invitations to present the project in the scientific community
• Number of visiting students working on the project
• Number of papers published on the topic by others
• Number of citations of our papers by others
Objective #9: Facilitating Exploitation
As a FET project, OPPORTUNITY does not aim at directly creating commercially exploitable results. However, the topic and the expected results are clearly highly relevant for many emerging applications such as Ambient Assisted Living, Mobile Workers Assistance, Interactive Environments, Personal Health Management and Energy Efficient Building Management. The consortium partners are involved in a whole range of projects that work towards such applications. To facilitate the future exploitation of the results of OPPORTUNITY we intend to keep close contact and information flow with such exploitation-oriented projects and industrial partners. Specifically we intend to:
• Organize technology workshops with attendance from application-oriented projects and industrial players
• Produce a newsletter and a brochure emphasizing the technology potential of OPPORTUNITY
• Present talks at industry/technology-oriented conferences/forums (e.g. Embedded World, AAL congress) and fairs
Success Criteria. We will organize at least two technology workshops, and publish a newsletter at least twice per year reaching about 20 industrial organisations. We intend to give 1 to 2 talks per year at relevant technology-oriented workshops/fairs.
B.1.2 Progress beyond the state of the art
The outcomes of OPPORTUNITY are activity and context recognition systems that alleviate the static constraints placed on the context recognition chain. In particular, OPPORTUNITY will facilitate adaptivity to sensor signal degradation, adaptivity to changes in system parameters, adaptivity to sensor withdrawal, and ad hoc exploitation of additional resources. Such properties are new and far beyond the state of the art, not only in activity recognition but also in virtually all other sensing and pattern recognition areas. In addition, OPPORTUNITY provides systems with the ability to self-manage the sensing resources, self-configure the recognition chain, and evolve strategies to deal with re-occurring settings and long-term changes in an unsupervised manner. Again, no system with such properties exists to date for activity and context recognition. From a scientific and technical perspective, additional outcomes of OPPORTUNITY are found in the specific fields that, combined, make up the originality of the OPPORTUNITY approach. Individually, these advances are significant contributions beyond the state of the art in their own right. They include advances in: context/activity recognition; machine learning; cooperative sensor ensembles and sensor networks (software, control and programming paradigms); self-/semi-supervised learning; autonomous evolution of activity/context-recognition systems; an embodied/situated view of activity/context recognition; self-supervision principles (system-, user-interactive-, and EEG-based supervision); context/activity recognition system modelling; and self-describing "smart sensors". We envision OPPORTUNITY to have a strong explorative nature throughout the project. It will survey, assess and learn from a variety of scientific disciplines, not limited to the key expertise of the consortium.
In particular, aspects of pattern recognition, adaptation, learning, and robustness are topics of research in the neurosciences, cognitive psychology, and the behavioral sciences, to mention a few. OPPORTUNITY aims to draw from these communities to rethink the problem of activity recognition with a vision broader than that of an applied machine learning problem. Questions that will be raised include what activities are and how they are defined and understood, not only from a sensing and signal processing viewpoint but also from a human and user perspective, i.e. that of an embodied and situated agent immersed in an ambient intelligence environment. OPPORTUNITY will enable the long-term autonomous evolution of AmI environments (autonomous recruitment and training of additional sensor devices), which is key to large-scale AmI environments. By reconsidering the recognition chain, flexible activity and context recognition goals and priorities also contribute to increased application flexibility. OPPORTUNITY will pave the way to robust context and activity recognition systems suited for real-world use. By removing current ideality and static assumptions, OPPORTUNITY will improve the comfort, naturalness, flexibility, and suitability to multiple goals of wearable and pervasive computing systems. This will support challenging existing and new real-world application scenarios. As a concrete example, Ambient Assisted Living will become more convenient for the elderly or persons with disabilities.
B.1.2.1 Baseline: State of the art
OPPORTUNITY is grounded in the field of Ambient Intelligence (AmI), specifically the development of methods to infer human activity from body-worn and environment sensors. It is also related to wireless sensor networks (WSN), which are seen here as a technology enabling opportunistic networks. Finally, it touches on issues related to autonomous operation and self-organization, a topic of many bio-inspired computing projects.
State-of-the-art research in human activity recognition focuses on the development of methods for the robust recognition of specific trained activities in challenging real-life environments from a pre-defined set of multimodal sensors. Examples of past and present related European projects are SmartIts, WearIT@Work, RELATE, ALLOW, MyHeart, and MonAmi. Outside Europe, key research groups are found among others at MIT, GeorgiaTech, CMU, the University of Washington, Intel and Microsoft. While this research is diverse both in terms of the targeted activity types and the sensors used, virtually all projects have one thing in common: they assume static, well-defined sensor configurations with constant quality, no degradation, placed at "optimal" locations on the body or in the environment. In addition, the methods are often specifically trained for the target user (user-dependent training). These limitations are a result of treating activity/context recognition as a sense/classify problem. The traditional recognition chain consists of sensing, pre-processing, feature extraction, classification and higher-level processing (e.g. decision fusion, reasoning). Once the algorithms are designed and the classifiers trained, this recognition chain is fixed, statically defined, and programmed into a system. The recognition chain does not allow for flexibility: all its components require static, a priori defined operating rules. Consequently, the traditional activity recognition chain is simply not suited for activity recognition in opportunistic networks. Within this project we remove these static assumptions by developing methods for an opportunistic, adaptive use of available sensors. There is a networking element to the development of opportunistic activity recognition systems.
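The fixed sense/classify chain criticized above can be caricatured in a few lines; the concrete stage operations (mean removal, an energy feature, a fixed threshold) are placeholders chosen for brevity, not part of any real system:

```python
# Every stage below is wired and parameterized at design time; the chain
# works only for the exact sensor and placement it was built for.

def preprocess(raw):
    """Pre-processing: remove the signal mean (placeholder operation)."""
    m = sum(raw) / len(raw)
    return [x - m for x in raw]

def extract_features(signal):
    """Feature extraction: a single energy feature (placeholder)."""
    return [sum(x * x for x in signal) / len(signal)]

def classify(features, threshold=0.5):
    """Classification: a fixed, pre-trained decision rule (placeholder)."""
    return "active" if features[0] > threshold else "idle"

def recognition_chain(raw):
    """The statically composed sense -> classify pipeline."""
    return classify(extract_features(preprocess(raw)))

label = recognition_chain([0.0, 1.9, -2.1, 2.0, -1.8, 0.1])
```

Every stage and parameter is frozen at design time; replacing the sensor, its placement, or its sample rate invalidates the trained threshold, which is precisely the rigidity OPPORTUNITY sets out to remove.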
Available sensors must be discovered, their capabilities must be queried, and data must be exchanged between sensor nodes (capable of doing local processing at different levels of the activity recognition chain) and processing elements (e.g. a PDA). The wireless sensor network community investigates these networking aspects. Examples of European projects dealing with mobile and ad-hoc networking include e-SENSE, and ongoing projects towards an "Internet of Things" (e.g. SENSEI). Multi-agent systems also provide means for distributed autonomous agents to cooperate towards a common goal (e.g. the JADE framework, the EU CASCOM project). Our aim is to build on existing competencies in opportunistic WSNs [Fers06] and on the systems already built towards these kinds of spontaneous networking [Fers07b][Fers08], but also to reuse technological building blocks from WSN and extend them for activity recognition in opportunistic setups. OPPORTUNITY relies on opportunistic data gathering and processing in dynamic environments. As such it will take advantage of recent advances in adaptive middleware and composable software architectures. A number of EU research projects address the issue of providing middleware and software architectures to support context-aware applications in highly dynamic mobile environments (RUNES, MUSIC, HYDRA, MADAM, e-SENSE, SENSEI). In comparison to OPPORTUNITY, these approaches tend to operate at a higher abstraction level by assuming that mostly self-contained (sensor and processing) entities are able to provide contextual information that can be composed into context-aware services in a flexible way. This provides answers to robust data acquisition, flexible execution abstraction mechanisms, and the provision of services through the composition of context-aware entities. OPPORTUNITY will learn and benefit from these advances.
However, OPPORTUNITY focuses on an essential underpinning of context-aware systems that is not tackled by adaptive middleware and composable software architectures: the problem of inferring contextual information from opportunistic sensor data, which is in essence an adaptive pattern recognition problem. The proposed opportunistic approach to activity recognition involves robust self-organization and autonomous operation of at times large ensembles of diverse sensing nodes. Analogies with biological systems are often followed in the investigation and design of such systems (e.g. artificial evolution with genetic algorithms, fault-detection with artificial immune systems). Several EU research projects follow bio-inspiration, including BIONETS, CASCADAS and HAGGLE. While some of the objectives of these projects (self-organization, autonomous operation) bear similarities with ours, these projects do not consider the problem of human activity recognition. Activity recognition in opportunistic networks raises a specific set of challenges in signal processing, data segmentation and classification, which is the focus of this project. In summary, state-of-the-art activity and context recognition systems share the following limitations:
• They assume static, a priori defined sensor configurations
• They assume that sensors do not exhibit faults or degradation
• They are not suited to using additional sensing resources discovered at run-time
• They cannot cope with changes in sensor parameters (e.g.
sample rate, resolution, accuracy)
• They are adversely affected by changes in sensor placement or orientation
• As a result, they tend to be user-specific (sensitivity to changes in body proportions)
• In a broader sense, state-of-the-art approaches are ill-suited to enabling large-scale, open-ended AmI environments: as new sensors are added, or new contexts need to be recognized, specific retraining is required for each sensor and context.
B.1.2.2 Advances over the State of the Art
This project goes beyond the state of the art in context/activity recognition as it removes the up-to-now static assumptions on sensor placement, availability and characteristics, by developing methods for an opportunistic, adaptive use of available sensors. In particular, from an activity and context-recognition perspective it provides the following features not available with traditional approaches: adaptivity to sensor signal degradation; adaptivity to changes in system parameters; adaptivity to sensor withdrawal; and opportunistic exploitation of additional resources. Specific contributions beyond the state of the art are detailed hereafter. We envision OPPORTUNITY with a strong explorative nature. The current state-of-the-art approaches will be considered in the ongoing reflection towards opportunistic activity recognition systems. OPPORTUNITY will capitalize on and embrace these approaches, but will also take inspiration from fields outside of engineering (e.g.
biology, cognitive psychology) in order to stimulate reflection on the problem of machine activity recognition.
Opportunistic context and activity recognition algorithms
Human activity recognition by means of opportunistic, spontaneous sensor ensembles poses fundamental challenges in terms of methods for the dynamic adaptation of the entire recognition system according to the prevailing situation or context, relying on operational principles and algorithms from control theory and machine learning, and on mechanisms taking inspiration from biological and even economic systems.
Abstract intermediate features
We will develop intermediate feature sets that abstract classifiers from specific sensors, in order to provide sensor independence. Independence includes placement/orientation independence (depending on sensor type), operating parameter independence, and even sensor type independence. We will identify placement-invariant sensors and features, as well as develop appropriate signal transformations in order to achieve placement-independent operation. This will yield advances in "smart sensors" capable of self-description and of directly providing intermediate features that seamlessly enable activity recognition when placed into an existing infrastructure (e.g. an ambient or body-worn network), towards a more widespread use of activity and context-recognition systems. Including accuracy ("quality of sensing") in the self-description makes it possible to reflect the reliability and accuracy of sensing and context in user feedback. A simple example of intermediate features are body limb trajectories. A wide range of sensors can be used for tracking limbs (accelerometers, gyroscopes, magnetic trackers, optical trackers etc.), and all of them can be mapped onto spatial trajectories of varying precision and accuracy. Another example is a smart sensor capable of detecting body location (e.g.
by generalization and extension of [Kunze05,Kunze07b]) and adjusting its signal processing parameters accordingly (or even directly providing intermediate features). For example, an acceleration sensor placed at the limb extremity will be more sensitive than one placed at a joint, due to the higher centrifugal force when the limb rotates around the joint. By detecting body placement, the acceleration values can be normalized to a reference body location. Further examples include sensors which operate at various sample rates or resolutions (e.g. for energy reasons), but internally provide the required abstraction by transforming the raw signals into intermediate features (with the corresponding tradeoffs, e.g. in confidence or signal-to-noise ratio). The key outcome will be a set of mappings from sensors to intermediate features. Since this is eminently sensor-specific, the outcome will take the form of an exhaustive survey of sensors used in activity recognition and the characteristic components they measure, together with an investigation of suitable intermediate feature representations, their characteristics (e.g. costs/benefits), and algorithms to transform raw sensor data into intermediate features. Generic methodologies, enabling application to novel sensor domains, will be provided. As much as possible, results will be kept generic to enable application to other sensing domains. This approach to abstract intermediate feature representation is a key contribution of OPPORTUNITY beyond the state of the art.
Opportunistic classifiers
In order to make the best use of available sensor data, we will develop new modular opportunistic classifiers tailored for activity recognition in opportunistic setups. These classifiers can dynamically deal with changes in the number and reliability of input features (depending on available sensors).
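The placement normalization just described can be sketched for the rotation-induced component of an arm swing: the measured magnitude scales with the distance r from the joint (a = w^2 * r), so once the placement is detected the reading can be rescaled to a reference location. The placement radii and swing rate below are invented for illustration:

```python
# Hypothetical sensor-to-joint distances in metres for an arm rotating
# about the shoulder (values invented for illustration).
PLACEMENT_RADIUS = {"upper_arm": 0.15, "forearm": 0.40, "wrist": 0.60}
REFERENCE = "wrist"  # body location at which the abstract feature is defined

def normalize_accel(a_measured, placement):
    """Rescale the rotation-induced acceleration (magnitude w**2 * r for
    angular velocity w at distance r from the joint) from the detected
    placement to the reference body location."""
    return a_measured * PLACEMENT_RADIUS[REFERENCE] / PLACEMENT_RADIUS[placement]

# The same 2 rad/s swing observed from two different placements:
omega = 2.0
a_forearm = omega ** 2 * PLACEMENT_RADIUS["forearm"]  # sensor on the forearm
a_wrist = omega ** 2 * PLACEMENT_RADIUS["wrist"]      # sensor at the wrist

normalized = normalize_accel(a_forearm, "forearm")    # comparable to a_wrist
```

After normalization, both placements report the same reference-location value, so downstream features no longer depend on where the sensor happens to sit on the limb.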
They allow new features to be incorporated without the need for complete re-training. They provide graceful degradation when the feature set is reduced. They are parameterized using a sensor self-description (e.g. sampling rate, accuracy, placement), in order to make the best use of the available information. They provide mechanisms for assessing the reliability of their own decisions. Moreover, they are able to incorporate new knowledge in an efficient way through online training (e.g. in the case where a new sensor is added to the system). In addition, these classifiers are suited for online processing, with limited computational/memory requirements, so that they can operate on low-power devices and miniature wearable systems. In addition, we will develop opportunistic classifier fusion. Classifier fusion methods will in the same way be modular with respect to the available number, type, and characteristics of classifiers/sensors. These methods take into account the uncertainty of each individual input stream to dynamically select the best configuration. This process will be performed in order to remove noisy or faulty channels, as well as to incorporate recently discovered sensors into the opportunistic network. This dynamic process may also take into account task-dependent constraints in terms of performance and energy consumption. Previous work has suggested the suitability of classifier fusion for implementing activity recognition systems able to deal with a variable set of sensors [Zappi07], in particular by assessing the performance degradation upon sensor failure. However, these studies are mainly based on homogeneous sensors and provide no mechanism to recover from faults, to incorporate new sensors into the system, or to provide a measure of decision uncertainty. Opportunistic classifiers and classifier fusion go beyond the state of the art along these dimensions.
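A minimal sketch of such modularity (not the project's actual classifiers; the channels, activities, and training values are invented) is a naive-Bayes-style classifier with one independent Gaussian model per feature channel, so that channels can be trained, added, or dropped without touching the others:

```python
import math

class ModularNB:
    """Naive-Bayes-style classifier with one independent Gaussian model per
    feature channel. Channels can be trained, added, or dropped without
    retraining the rest (an illustrative stand-in, not the project's
    actual algorithms)."""

    def __init__(self, classes):
        self.classes = classes
        self.stats = {}  # channel -> {class: (mean, variance)}

    def train_channel(self, channel, samples):
        """samples: {class: [values]}; trains ONLY this channel's model."""
        per_class = {}
        for cls, vals in samples.items():
            m = sum(vals) / len(vals)
            v = sum((x - m) ** 2 for x in vals) / len(vals) or 1e-6
            per_class[cls] = (m, v)
        self.stats[channel] = per_class

    def drop_channel(self, channel):
        """Remove a failed/withdrawn sensor; no other channel is touched."""
        self.stats.pop(channel, None)

    def classify(self, observation):
        """observation: {channel: value}; channels without a trained model
        are ignored, missing channels simply contribute no evidence."""
        scores = {}
        for cls in self.classes:
            logp = 0.0
            for chan, x in observation.items():
                if chan not in self.stats:
                    continue
                m, v = self.stats[chan][cls]
                logp += -((x - m) ** 2) / (2 * v) - 0.5 * math.log(2 * math.pi * v)
            scores[cls] = logp
        return max(scores, key=scores.get)

clf = ModularNB(["walk", "sit"])
clf.train_channel("accel", {"walk": [1.0, 1.2, 0.9], "sit": [0.1, 0.0, 0.2]})

# A newly discovered gyroscope only needs its own channel trained:
clf.train_channel("gyro", {"walk": [0.8, 0.9], "sit": [0.05, 0.1]})
pred = clf.classify({"accel": 1.1, "gyro": 0.85})

# Graceful degradation: the accelerometer disappears, the rest still works.
clf.drop_channel("accel")
pred_degraded = clf.classify({"gyro": 0.08})
```

The per-channel factorization is what makes addition and removal cheap: a new sensor contributes one extra factor, and a withdrawn one simply stops contributing evidence.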
These opportunistic classifiers and fusion methods translate into advances in machine learning tools for opportunistic context/activity recognition in wearable and pervasive/ubiquitous computing systems. They also yield advances in context recognition system modelling, in particular with respect to enabling the intelligent use of opportunistic resources to achieve a desired performance characteristic (e.g. power/performance tradeoffs given dynamic availability of resources).
Dynamic adaptation and long-term autonomous evolution
A critical feature of opportunistic systems is their ability to cope with dynamic environmental changes. Such changes include: signal drift/degradation (e.g. due to shifts in sensor placement/orientation), progressive changes in the linkage between user activity goals and signal templates, and addition/removal of sensing resources. To handle such changes, OPPORTUNITY provides specific adaptation mechanisms. Dynamic adaptation copes with rapid changes in sensor configurations or application performance goals. It relies on models relating sensor configuration and context recognition chain parameters to performance metrics. These models are formalized so that they allow for efficient online adaptation of the sensor configuration and context recognition chain to reach a performance goal. Heuristics to find the appropriate settings for a desired performance goal are devised on this basis. Dynamic adaptation goes hand in hand with opportunistic modular classifiers and classifier fusion (see section on opportunistic classifiers). Autonomous evolution deals with the long-term gradual adaptation of the system to a new environment, user or sensor configuration. On one hand, unsupervised (data-driven) techniques are applied to achieve feature and classifier adaptation.
On the other hand, methods related to self-supervised learning (using system-, user-, and error-related EEG correlate feedback to supervise learning) and semi-supervised learning [Chapelle06, Vapnik98, Huang06] will be applied to take advantage of information provided by the system itself, or provided by the user in particular situations, to retrain classifiers towards a new adapted signal template. This will take into account the different uncertainty levels of the existing sensor configuration and the likelihood of changes in the sensor configuration (e.g. signal degradation vs. normal activity variability). In order to exploit opportunistically added sensing resources, the OPPORTUNITY context recognition chain will self-adapt to take advantage of new information sources. Concretely, classifiers for newly discovered sensors will be trained using information from the current system and/or provided by the users through minimally supervised interactive feedback. This will allow an existing context-aware system to incorporate new sensors without requiring specific off-line training or calibration processes. Indeed, new, virgin sensors will in this way be trained from an already operational context-aware system. We will capitalize on error-related EEG correlates (EEG signals, detected automatically, that arise when a system's behavior deviates from expectations) to obtain an endogenous measure of system performance to guide system adaptation and autonomous evolution [Chavarriaga07, Ferrez08]. We will define principles for the inclusion of minimal user feedback (i.e. maximizing information gain while minimizing user disturbance) in order to support interactive online adaptation. These principles will guide the choice of the feedback mechanism for online adaptation, using system-, user-, or EEG-based feedback depending on confidence values, system stability goals and user interaction costs.
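The confidence-gated retraining described above can be reduced to a small sketch: a class template is moved towards a new sample only when system- or EEG-derived confidence in the decision is high. The threshold and learning rate below are invented values, not project-specified parameters.

```python
# Illustrative confidence-gated online adaptation of a class template
# (self-supervised learning in miniature). Threshold and rate are
# assumptions made for illustration only.

def adapt_template(template, sample, confidence, threshold=0.8, rate=0.1):
    """Move the template towards the sample only under high-confidence feedback."""
    if confidence < threshold:
        return template                  # ambiguous feedback: no adaptation
    return [t + rate * (s - t) for t, s in zip(template, sample)]
```

Low-confidence samples leave the template untouched, which is one simple way to trade adaptation speed against stability, a tradeoff discussed later in the success criteria.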
Dynamic adaptation and autonomous evolution are key contributions beyond the state of the art in activity and context recognition systems. Overall this contributes to context recognition systems capable of operating in open-ended environments, i.e. environments where sensors may not all be capable of self-description, where the deployed infrastructure may change over time, or where the set of activities to recognize may change over time.
Advances in Ambient Intelligence Environments
The above advances in opportunistic context and activity recognition algorithms improve the robustness and suitability of activity and context recognition systems in real-world environments. From a user point of view, this leads to greater comfort of use of activity recognition systems. The flexibility in how sensors are opportunistically used for various applications lowers the need to deploy specific sensor setups for each application. The ability of the system to adapt to various sensor placements on the body improves wearability and user acceptance. It allows the user to change the sensor placement on purpose (e.g. the location of a sensor may be changed if it becomes uncomfortable). This approach also allows the system to cope with changes in sensor placement occurring slowly over time, or with sensor failures, leading to improved robustness compared to state of the art approaches. Placement and sensor independence also leads to reduced user-dependence. This advances the state of the art, since current activity recognition systems must be individually trained to achieve best performance, which makes deployment tedious.
From the point of view of the validation scenarios, the outcome of OPPORTUNITY is to demonstrate that robust activity recognition can be performed despite the usual variability in sensor placement and orientation typical of sensors placed on-body and/or integrated into clothing, mobile devices, or the environment. This natural variability remains a challenge to state of the art approaches.
Advances in Opportunistic Brain-Computer Interfaces
The methods developed within OPPORTUNITY are generalizable. In other words, advances in machine learning, dynamic adaptation, spontaneous cooperative sensing, sensor self-configuration, and robustness and fault tolerance are not confined to human activity recognition. We will demonstrate this in EEG-based Brain-Computer Interfaces (BCI), a complex cognitive context recognition task. EEG-based BCI typically relies on a large number of homogeneous sensors (electrodes). Sensor drift and intermittent skin contact (both normal when an EEG electrode cap is worn over long periods of time) are common problems. Despite efforts to build adaptive BCIs (see section A.1), these interfaces remain highly sensitive to sensor failures or noise. Moreover, to the best of our knowledge, no current system is endowed with the capability of dynamically changing the channels or features used for cognitive state recognition. The methods developed by OPPORTUNITY have the potential to improve the robustness of BCI systems by dynamically selecting the appropriate set of electrodes required to achieve successful operation and, upon detection of failure, recruiting additional channels in order to minimize the performance degradation. Moreover, these systems will also be able to adapt to inherent changes in the EEG signal. Dynamic adaptation mechanisms can be used to assess the system performance and prompt an appropriate corrective action (e.g. removal of noisy channels and/or adaptation of the classification process).
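The dynamic channel selection just described can be illustrated with a small sketch that greedily recruits electrodes until a predicted accuracy goal is met. Channel names, per-channel gains, costs, and the additive-gain assumption are all illustrative simplifications, not a validated model.

```python
# Greedy sketch of dynamic channel selection: recruit channels by predicted
# accuracy gain per unit cost until the goal is reached. Treating gains as
# additive is a crude assumption made only for illustration.

def select_channels(channels, accuracy_goal):
    """channels: {name: (predicted_accuracy_gain, cost)} -> (chosen, predicted)."""
    ranked = sorted(channels.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    chosen, predicted = [], 0.0
    for name, (gain, _cost) in ranked:
        if predicted >= accuracy_goal:
            break
        chosen.append(name)
        predicted += gain
    return chosen, predicted
```

Upon detection of a failing channel, the same routine could simply be re-run without that channel to recruit replacements, which is the recovery behavior described above.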
Spontaneous goal-oriented sensing ensembles
Activity and context recognition in opportunistic networks requires data acquisition about physical phenomena. We pursue spontaneous goal-oriented sensing ensembles, spanning software architectures, methods and control algorithms to enable self-organizing sensor networks, and programming models to effectively implement means to acquire data relevant to activity and context recognition. To meet the challenges of OPPORTUNITY raised in the previous sections with respect to the way services, data, and resources are managed and orchestrated, a rethinking of traditional middleware and service frameworks is required. OPPORTUNITY will contribute to this research in several areas, by:
• identifying an innovative model for service and data provisioning that revises the typical architecture of current middleware and service frameworks towards coordination architectures, describing the spontaneous, yet cooperative, interactions among sensing entities, to overcome current limitations;
• developing a more general approach to sensor data collection than those proposed by recent pervasive and ubiquitous middleware and service frameworks;
• proposing innovative and more general solutions for the automatic (self-organized and self-configuring) collection, aggregation and interpretation of data, as well as of services and resources, and their dynamic interactions.
Going beyond service models and middleware infrastructures that typically provide specific functionalities (e.g. sensor data fusion, context recognition, situation-aware software adapters) to support application development, OPPORTUNITY will cherry-pick and extend the best and most promising solutions offered by several of these models and middleware.
Concerning approaches to model and build self-organizing/self-adaptive applications, a variety of heterogeneous proposals exist for both the basic components (e.g. reactive agents in agent-based middleware architectures [Par07], and proactive and goal-oriented ones [Tum05]) and their interactions (e.g. pheromones [Par97], virtual fields [Bab06, Mam06, MamZ06], socially-inspired communication mechanisms [Jel04, JelB05, HalA06], and smart data structures [JulR06, Riv07]). Another promising research avenue is based on recently proposed approaches for automatic service composition based on semantic, goal-oriented pattern-matching [FujS06, Qui07, Maz07]. The basic idea in these approaches is that a semantic description can be attached to services, describing what a service can provide to other services and what it requires from them. On this basis, automatic (typically centralized) pattern-matching mechanisms can be enforced for composing services in an unsupervised way. Two interesting contributions related to these concepts are the work on the “knowledge network” performed within the CASCADAS project [Bau07], and the work on “dynamic context-driven organizations” [Hae07]. [Bau07] proposes an approach for self-organizing contextual information into structured collections of related knowledge items, supporting services in reaching a comprehensive understanding of “situations”. In [Hae07], a set of evolution rules defined in a coordination substrate determines how agents can dynamically join and leave organizations based on their actual configuration.
Research on existing formal frameworks on rewriting systems [Ban01], process algebras [Mil99], modal logics [Par05], and chemical-oriented computational models [Har86, Fis00] provides interesting starting points to study how components can be flexibly and dynamically matched with each other to create composite high-level services. The approach envisioned in [But02, BeaB06], but never fully developed, consists of code capsules injected into a sort of dense sensor network (the “Paintable Computer”). Capsules interact by means of chemical-like reactions to trigger novel and composite services. In OPPORTUNITY we will investigate and take inspiration from approaches in the artificial chemistry area [But02, DitZB01]. Here, services, like chemical reagents, will automatically combine according to the laws of (artificial) physical forces in the environment. Coordination mechanisms as exhibited in the CHAM (Chemical Abstract Machine) [Ban01][Ban06] appear to be a potentially effective means to describe and control distributed spontaneous interactions in sensor ensembles. One of the key outcomes of OPPORTUNITY will be to review, improve and apply some of these abstract techniques in concrete, complex settings, testing their utility in actual tasks, in particular for (sensor) data collection, aggregation and organization. In this regard, OPPORTUNITY will integrate and improve these approaches in several ways: (i) OPPORTUNITY will try to identify a general-purpose architecture model able to represent and subsume the above proposals under unifying abstractions. (ii) OPPORTUNITY will address self-organization and goal-oriented sensing in a world of heterogeneous components, in contrast to most current studies, which focus on ensembles of homogeneous components.
(iii) The ecological perspective fostered by OPPORTUNITY fits well with ideas in related fields that consider pervasive scenarios as devices and services cooperating autonomously to form global services [Agh08]. The plethora and heterogeneity of devices, services, and applications comprising such global services will be taken into account and managed by the patterns and rules governing opportunistic systems. The key innovation of OPPORTUNITY will be to actually create a prototype and test this idea of an “ecology” in concrete validation scenarios. To the best of our knowledge, this idea has up to now been proposed only as a metaphor; OPPORTUNITY will attempt an actual implementation of the concept. The OPPORTUNITY coordination architecture for spontaneous goal-oriented sensing ensembles will generalize methods based on semantic match-making between services as outlined above. It will focus on formal methods to describe and control the distributed coordination of goal-oriented entities from an interaction architecture point of view. In fact, artificial chemistry-like operations bear the potential to be processed much more simply and autonomically than previous semantic discovery and matching services. Finally, it is worth pointing out that the OPPORTUNITY coordination architecture will follow an “ecological” perspective of sensor ensembles, also respecting an opportunistic “growing” of sensor populations. Technological progress, monotonic software growth, and communication opportunities not anticipated a priori force us to design sensor ensembles as services that sustain themselves in an ecosystem of other services. In summary, the OPPORTUNITY middleware architecture follows an ecological perspective allowing for self-organization of data and services, self-adaptation, and decentralized deployment and exploitation of data and services.
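As a toy illustration of the chemical-style service composition discussed above, the sketch below applies reaction rules over a multiset of service tokens until no rule fires. The service names and rules are invented for illustration and are not part of the cited CHAM formalism.

```python
# Toy artificial-chemistry engine: services are tokens in a "solution";
# reaction rules combine reagent services into composite ones.

from collections import Counter

RULES = [
    # (reagents, product): matching services "react" into a composite service
    (("accel_stream", "gyro_stream"), "motion_feature"),
    (("motion_feature", "location"), "activity_estimate"),
]

def react(solution):
    """Apply reaction rules to a multiset of services until none can fire."""
    pool = Counter(solution)
    changed = True
    while changed:
        changed = False
        for reagents, product in RULES:
            if all(pool[r] >= 1 for r in reagents):
                for r in reagents:
                    pool[r] -= 1
                pool[product] += 1
                changed = True
    return +pool  # drop zero-count entries
```

Because reactions fire purely on local availability of reagents, no centralized matchmaker is needed, which is the property that makes chemical-style coordination attractive for spontaneous ensembles.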
This ecological middleware architecture leads to great flexibility in how data about physical phenomena can be opportunistically acquired in efficient and scalable ways for activity/context recognition.
Large scale autonomously evolving AmI environments
The contribution beyond the state of the art is to consider an activity recognition system as an embodied and situated system that autonomously evolves over time in open-ended environments. This closed-loop perspective capitalizes on self-adaptation through feedback (recurrent connections). Feedback comes from automatically detected error-related EEG correlates [Ferrez08] and from system self-supervision to achieve autonomous evolution. Principles of minimal interactive feedback are included to guide evolution by providing sporadic, minimally distracting, interactive user feedback. Environmental feedback is intrinsic to the system, since the outcomes of context recognition affect upcoming user activities. This closed-loop, self-supervised learning approach towards autonomously evolving AmI environments is strongly related to processes of cognitive development [Weng01], although it operates at a high abstraction level in contrast to biomimetic approaches. Therefore, in a broader sense, OPPORTUNITY enables Ambient Intelligence (AmI) environments on a scale and with a flexibility that has not been possible until now. In current AmI environments, changes in the sensor configuration, or changes in the set of activities to recognize, require the system to be manually configured anew. OPPORTUNITY alleviates these limitations. An AmI environment operating with the mechanisms developed within OPPORTUNITY has the ability to learn how to make use of additional sensor modalities (e.g. when a new sensor is placed in the environment or on-body). Over time, the set of activities it recognizes can also be autonomously extended. This occurs when a device capable of recognizing a new set of activities is introduced into that environment.
Upon recurring detection of an activity or context by that device, autonomous evolution makes it possible to adjust the operating parameters of the existing sensors within the AmI environment to detect the same activity. In the same way, a mobile device may learn to recognize activities from another one; or a mobile device exchanged between two disconnected AmI environments may enable them to learn to recognize the same set of activities. Altogether, OPPORTUNITY provides mechanisms by which AmI environments can be configured and extended at run-time, potentially cooperatively by a multitude of users. This provides the means to deploy and train large scale AmI environments in a flexible manner.
B.1.2.3 Performance/research indicators
Below are summarized the performance and research indicators against which the project outcomes may be assessed, categorized according to the project's objectives.
Objective #1: Self-* capabilities of sensors and sensor ensembles
Self-description. Success Criteria: The ability to provide an adequate description of all relevant sensor parameters and variations for different sensor types in the OPPORTUNITY case studies (see objective 7) within a small microcontroller-based node (8/16 bit, 4 MHz, <64 kByte memory).
Dynamic sensor self-characterisation. Success Criteria: Demonstration in the OPPORTUNITY case studies (see objective 7) that we can detect degradation with precision and recall both in the range of 90% and an accuracy (in terms of degree of degradation) of about plus/minus 20%. Examples of specific types of degradation that we will focus on are position changes of on-body motion sensors, intensity variations in on-body sound sensors caused by different enclosures (e.g. a mobile phone being put into a bag) and signal strength variations in RF positioning systems.
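To make the self-description criterion concrete, the sketch below packs a sensor self-description into a few bytes, the kind of footprint that could plausibly fit the 8/16-bit node mentioned above. The field layout and the modality/placement codes are invented assumptions, not a project-defined format.

```python
# Compact (6-byte) sensor self-description record. Layout and codes are
# illustrative assumptions, not a project-defined format.

import struct

MODALITY = {"accel": 0, "sound": 1, "rf": 2}
PLACEMENT = {"wrist": 0, "hip": 1, "environment": 2}

def pack_description(modality, placement, sample_rate_hz, resolution_bits):
    """Encode modality, placement, sample rate and resolution as 6 bytes."""
    return struct.pack("<BBHH", MODALITY[modality], PLACEMENT[placement],
                       sample_rate_hz, resolution_bits)

def unpack_description(blob):
    """Decode a 6-byte record back into its symbolic form."""
    m, p, rate, res = struct.unpack("<BBHH", blob)
    inv_m = {v: k for k, v in MODALITY.items()}
    inv_p = {v: k for k, v in PLACEMENT.items()}
    return inv_m[m], inv_p[p], rate, res
```

A few bytes per descriptor leaves ample room within the stated <64 kByte memory budget for the descriptions of all sensors on a node.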
Self-managed interaction and configuration. Success Criteria: Demonstration within the OPPORTUNITY case studies of complex interactions and configurations in sensor ensembles of 100 and more sensors. Demonstrate quantitatively the benefit in terms of recognition rate.
Objective #2: Creating and coordinating ad-hoc goal-oriented sensor ensembles
Success Criteria: The success criterion will be the availability of a spontaneous goal-oriented sensing coordination architecture implemented in a software framework that allows sensing missions (goals) to be generated from an application at run-time, that plans the sensor resources needed to accomplish the sensing mission, that uses this plan to acquire and solicit sensors and to configure them as an ensemble, and that coordinates the sensing mission during the whole lifetime of the application, subject to dynamic changes in the sensor population, availability, capability and semantic interoperability. In quantitative terms we will demonstrate the functionality with up to 50 sensor nodes, showing quantitative information content of spontaneous assemblies (e.g. through mutual information or test classifications) not more than 20% under the performance of hand-optimized systems.
Objective #3: Variation-tolerant signal processing and feature extraction
Variability-tolerant signal conditioning. Success Criteria: To demonstrate on specific examples that large (double-digit percentage) variations in sensor parameters can be neutralized, in the sense that they only cause a small (<5%) degradation in recognition performance. Initially we will build on our work on body-worn sensor displacement, followed by sound sensor intensity changes and indoor localization accuracy variations.
Abstract, sensor-independent features. Success Criteria: Demonstration of several (at least 8) types of abstract features and the fact that they can be computed using different sensor combinations. The recognition accuracy of the system using the abstract features should not vary by more than a few percent when different sensing modalities are used. Initially we will work with abstract features related to body motion, location, and interaction with devices.
Objective #4: Machine learning algorithms optimized for opportunistic networks
Opportunistic classifiers. Success Criteria: The success criterion is the comparison of the opportunistic classifiers to state of the art dedicated classifiers on a set of realistic problems. We aim at a recognition rate comparable to the dedicated classifiers (not more than 10% to 20% below). We will carry out a systematic performance evaluation of the developed classifiers with respect to sensitivity to signal noise, training requirements, and suitability for online implementation.
Opportunistic classifier fusion. Success Criteria: Again, a comparison of our system against dedicated recognition systems will be made, aiming for not more than a 10% to 20% performance difference. Moreover, opportunistic decisions based on classifier fusion are expected to outperform dedicated systems in case of sensor failure or sensor network reorganization.
Opportunistic fusion will be evaluated in terms of performance degradation and fault-recovery in cases of sensor noise and sensor failure, as well as its ability to perform dynamic input selection based on the reliability of available sensors.
Objective #5: Unsupervised dynamic adaptation
System modelling of context recognition systems. Success Criteria: The success criterion will be the ability of our models to predict the performance gains of our system when using various sensor combinations. We aim for the models to be accurate to within 10%.
Objective #6: Autonomous evolution
Autonomous evolution and interactive minimally supervised adaptation. Success Criteria: The performance will be compared to a trained baseline system not capable of dynamic adaptation while sensor variations are introduced. Variations include: sensor addition, sensor removal, long-term slow (with respect to activity occurrence dynamics) changes in sensor orientation and placement (with body-worn acceleration sensors in a first step), and slow (with respect to activity occurrence dynamics) addition of progressively higher signal noise. Performance will be characterized along metrics introduced in this project. We aim at achieving sustained performance within the range of the adaptation capabilities of the systems. The range of these capabilities will be characterized with respect to the tradeoffs intrinsic to autonomous evolution (e.g. faster adaptation speed vs. stability, template evolution vs. attractor strength). A further success criterion is to characterize the level at which autonomous evolution can proceed without user interaction, and with which tradeoffs, as well as to characterize the benefits of interactive user feedback and EEG-based feedback.
Objective #7: Empirical validation
Success Criteria: The ultimate success criterion will be the empirical comparison of our system to state of the art traditional (non-opportunistic) activity recognition systems. To this end we will train our system on a large, fixed set of typical sensors. We will then dynamically change the sensor configuration. Using classical recognition methods, a new system needs to be designed and trained for each such configuration. In contrast, our system will be expected to adapt automatically to the new configuration. For each configuration we will then compare the performance of our opportunistic system to the performance of a state of the art system specifically designed and trained for this configuration. On average we aim to achieve about 80% of the recognition rate of the dedicated system when the sensor configuration is not changed. However, we expect the opportunistic system to outperform the dedicated system when the configuration of the sensors is changed.
B.1.3 S/T methodology and associated work plan
B.1.3.1 Overall strategy and general description
OPPORTUNITY pursues a high-risk, yet well thought through and promising, approach to the development of opportunistic activity recognition systems. It is based on a large body of previous research performed by the project partners and a thorough understanding of all the components and possible variations of a recognition system. A hierarchical decomposition of the activity recognition problem enables the project to claim that the OPPORTUNITY solution generalizes well to a broad range of problems. Based on the hierarchical breakdown, an incremental approach has been designed for the project that allows us to pursue an ambitious, high-risk end goal without the risk of an ‘all or nothing’ strategy. Instead we work towards the goal in incremental steps, each of them in itself representing a significant scientific advance.
The breakdown is also the basis for an incremental approach to validation, which will start with simple activity components and then proceed to increasingly complex case studies, finally leading to a system demonstrator motivated by and closely related to relevant real-life applications such as personal healthcare and adaptive energy management in home and office environments. The work is divided into work packages following a logical partitioning of the work. Each work package is led by a partner with a long history of internationally recognized research in the corresponding area. Project deliverables and milestones ensure ambitious yet realistic project timing with well defined synchronisation points between different work packages and tasks. The project proposes a complex, large and ambitious work program. The consortium can handle this work program with the requested resources, because a huge amount of experience, algorithms, equipment and conventional activity recognition and sensing setups already exists at the partners' labs.
B.1.3.1.1 Overview
An opportunistic mobile system to recognize human activity and user context works as follows:
• Data acquisition: sensors providing information about the physical world (or virtual sensors) need to be discovered and networked in order to provide data for context/activity recognition; this data is brought to the context recognition system;
• Context recognition instantiation: a context recognition system is instantiated and parameterized according to the sensors available, to convert the data into information (user context and activities);
• Adaptation: throughout operation the recognition system must keep track of changes in the sensing environment and operating parameters and adapt itself accordingly.
The challenges of an opportunistic recognition system stem from the fact that the sensors that are discovered can:
• be in any place (in the environment, in devices, objects, on the body) and in any orientation;
• be of any modality (the type of physical quantity that is measured, e.g. motion, light, sound);
• have various characteristics (e.g. sample rate, resolution, accuracy, signal-to-noise ratio).
Furthermore, while the system is operating, some of these aspects can change. For instance the body location of a sensor can vary (e.g. a cellphone with a sensor may be placed in various pockets), or sensor characteristics can vary (e.g. a change in sample rate according to available energy). An efficient opportunistic system should thus: (i) make the best use of the available resources, and (ii) keep working despite, or improve thanks to, changes in the sensing environment. We address these challenges to devise opportunistic context and activity recognition systems. The key aspects of our approach (outlined in detail in the following sections) can be summarized as follows: 1. Opportunistic activity/context recognition. We propose a new adaptive, dynamic paradigm for the recognition of context/activities that will replace the traditional static recognition chain, as described in section B.1.1.2. It consists of the following components:
o Formulation of the recognition task as a flexible, application-specific goal that includes the preferences with respect to different recognition parameters (e.g. recall vs. precision) and possible simplifications of the recognition task.
o Methods for sensor self-characterization (e.g. automatic detection of signal degradation), self-description and self-configuration.
o Algorithms and control paradigms for the autonomous emergence of cooperative, distributed sensing ensembles optimally suited to provide the information required by the application-specified goal in a given situation.
o Signal processing algorithms and feature-level abstractions that mask the sensor-level variability from the classification stage.
o Parameterized, adaptive classification and classifier fusion methods that can deal with a broad range of variations in the sensor and feature space.
o Methods for unsupervised adaptation of the overall system configuration (combination of, and cooperation between, the sensing, signal processing, feature extraction and classification stages) under dynamically changing conditions.
o Methods for long-term, unsupervised evolution of the entire system to cope with open-ended environments and optimize the handling of re-occurring configurations.
2. Applicability. To ensure that the OPPORTUNITY approach is valid for a broad range of context recognition problems, not just for the few specific scenarios investigated in the project, we base our work on a hierarchical decomposition of the activity recognition problem. The decomposition is based on activity components such as location, modes of locomotion (walking, standing etc.), hand activity, interaction with objects, and interaction with humans. Complex activities are modeled as combinations of these components. 3. Risk management. We use the hierarchical breakdown as the basis for an incremental project structure that avoids the risks of an ‘all or nothing’ approach. In a first stage we deal with individual activity components. This means that we handle constrained problems for which experiments are easily and quickly assembled. In the second stage we consider composite activities. Finally we validate the approach on complex example scenarios typical of real-world activity recognition systems. 4. Generalization. In addition, a number of the methods that we develop in OPPORTUNITY are generalizable to context recognition systems other than activity recognition.
We demonstrate this by showing how specific cognitive states (e.g. attention, expectation) can be detected on the basis of EEG signals, using the approaches developed within OPPORTUNITY, towards Brain-Computer Interfaces (BCI). 5. Work breakdown. The project is organized in 5 scientific/technical work packages, each addressing a key scientific challenge area: cooperative goal-oriented information gathering, sensor-level adaptation, adaptive classifiers and classifier fusion, dynamic adaptation and autonomous evolution, and empirical evaluation in case studies. Each WP is led by a partner with a long history of research in the respective area. Deliverables and milestones have been established to ensure an ambitious yet realistic project timing. They also serve as synchronisation points for the work packages. 6. Dissemination and exploitation. Scientific dissemination is of key importance and will be handled accordingly to raise awareness of the new methods developed in the project. While OPPORTUNITY, as a FET project, does not aim at directly commercialisable results, the partners are involved in a wide range of European, national and industry-sponsored projects in related areas. The partners will actively pursue the dissemination of results into these projects to ensure maximum impact and open up exploitation avenues.
B.1.3.1.2 The Problem Space
A major challenge facing OPPORTUNITY is the complexity of the problem domain. In this section we discuss the key dimensions of the problem domain which the project needs to address. These are: (1) the types of variability in the sensor configuration that the system is likely to encounter, (2) the different timescales and temporal patterns of the variations, and (3) the different types of activity recognition problem that opportunistic systems may have to deal with. We present a heuristic, hierarchical breakdown of the activity recognition problem into simple components.
This breakdown is the basis for the OPPORTUNITY approach to generalization, complexity reduction and case study design.

TYPES OF SENSOR VARIABILITY

When speaking about dynamically varying sensor combinations it is important to understand that there are several distinctly different types of variability:

1. Signal quality degradation. Even without changes in the sensor configuration there can be variations in the quality of information that an activity recognition system receives. They can be due to sensor misadjustment (e.g. slipping sensors on body-worn motion tracking systems), external disturbances (e.g. signal strength variations caused by moving persons in WLAN-based positioning systems), or situative variations that diminish the value of certain sensing modalities (e.g. a machine used in a task is exchanged, changing the characteristic sound that was used for recognition). In conventional recognition systems such changes lead to a rapid, often radical decrease in recognition accuracy. OPPORTUNITY will allow systems to adapt to such changes and reduce the influence of signal quality degradation on recognition accuracy. Possible strategies are dynamic changes in the weights assigned to different sensing modalities (including leaving out a sensor entirely), unsupervised retraining, and the dynamic addition of information from alternative sensors.

2. Isolated failures. Failure of a single sensor or a sensor group is a common occurrence in long-term deployment of activity recognition. The challenge for opportunistic recognition systems is to develop strategies for ‘graceful degradation’ in case of such failures. By comparison, in the vast majority of current systems the failure of one or more sensors leads to total system failure (as classifiers tend to be trained on a specific fixed-dimensional feature space).

3. Partial reconfiguration.
In principle, the isolated failures described above can be seen as a special case of partial reconfiguration. However, for many reasons it makes sense to differentiate between the two. Whereas isolated failures refer to one or at most a few sensors being removed entirely, partial reconfiguration takes place when a significant number of sensors in the system are changed. This can include removing some sensors, but also adding sensors. Typically partial reconfiguration would occur when the user changes part of his outfit and with it the integrated/attached sensors. Another example is a user with on-body sensors moving from one instrumented environment to another (e.g. from home to office). Here the on-body configuration stays the same while the external information sources change. Handling partial reconfiguration is more complex than just changing weights. We need adaptive classifiers, sensor-independent features or appropriate classifier combination methods to handle this type of variability.

4. System change. As an extreme case of reconfiguration we consider the exchange of the (more or less) entire system. Thus after jogging in the morning (with the sensors integrated in his sports accessories) a user would change into his business outfit and go to the office. Alternatively a user whose system primarily relies on environmental sensors for activity recognition would go from one environment to another (e.g. from one office to another).

5. High-level cooperation. In many scenarios we would be dealing with different, autonomous activity recognition systems. Such systems may share information helping each other understand what is going on. Thus, in a meeting scenario each user may have a separate, different activity tracking system. In addition the meeting room may have some sensing infrastructure. For privacy reasons such systems may not be able to share information on the sensor level. Instead filtered, high-level activity information (e.g.
my user is most probably presenting) would be selectively shared. OPPORTUNITY will deal with such cooperation between devices as part of the overall activity recognition chain.

TEMPORAL VARIABILITY PATTERNS

From the timing point of view, there are different ways in which the variation can happen:

1. Spontaneous, random events. The most obvious cause of a change in sensor configuration is a ‘spontaneous’ event such as a sensor error, the user leaving an appliance behind, or going to a different location. By definition, spontaneous random events cannot be foreseen by the system, and thus the system cannot proactively prepare for them.

2. Periodically re-occurring changes. For most users there are fixed routines that they follow. Thus, on most days, a person would for example start with a workout in a gym, then go to the office, followed by shopping, possibly dinner out and then coming back home. Each of those activities can be associated with a certain loosely defined sensor configuration. A person will mostly (of course not always) work out in the same gym, work in the same office building, shop in the same shops and live in the same house. How much variation there is in the pattern depends on the user. A salesman will for example not work in the office but travel to changing customers, whereas a clerk will spend most of his time in the same office. Thus variations in sensor configurations are not truly random but follow a probabilistically predictable pattern. Unsupervised learning, with the system spotting re-occurring environments and adapting to them, can play an important part in dealing with this type of variability.

3. Gradual evolution. In some cases we can expect variations to be a gradual, continuous process rather than a discrete event. This would be the case when an on-body sensor begins to slip and becomes gradually more displaced.
Similarly, a user could move away from an area with a high density of WLAN access points, which would lead to the performance of a WLAN-based positioning system gradually deteriorating.

4. Continuous, random change. Especially in situations where cooperation between systems belonging to different users is relevant (e.g. meeting or conference assistance systems), change is likely to occur continuously as users come and go. Similar is the case for highly mobile, travelling users who continuously proceed from one environment to another (e.g. shopping and going from one shop to another) and strongly rely on the infrastructure for activity recognition.

HIERARCHICAL BREAKDOWN OF THE ACTIVITY RECOGNITION TASK

Human activity recognition encompasses a broad range of applications. Examples are as diverse as tracking industrial assembly activity [Stiefmeier08], monitoring of human nutrition habits [Junker08, Amft05], recognition of different martial arts moves [Heinz06] or following the course of a meeting [Kern03, Renals07]. The key to addressing the application diversity challenge is the observation that, despite their diversity, many activity recognition problems can be built from some basic, common components (see e.g. [Lukowicz02]). These are:

1. Hand activity. Much human activity is determined by what we do with our hands. Thus while for example industrial assembly tracking and nutrition monitoring may look like radically different domains, both involve characteristic arm motions. From a technical point of view, tracking arm motions is the same problem independently of the application domain. Most current research in this direction is based on inertial sensors (accelerometers, gyroscopes, magnetic field sensors and combinations thereof) placed on the arms and wrists [Junker08].
However, there is a broad range of other possibilities for sensing gestures, including textile bend sensors [Mattmann07], video tracking using external [Just06] and wearable cameras [Starner98], and different stationary motion tracking devices [Mitra07].

2. General body motion and posture. Posture and what is often referred to as ‘modes of locomotion’ (standing, running, walking, walking up stairs etc.) is an important piece of activity information. We are unlikely to engage in activities such as eating or doing a presentation while running. Recognition of modes of locomotion and posture is among the classical and best understood activity recognition tasks. The sensing modalities are mostly the same as for gestures, except that the sensor placement is more flexible (essentially anywhere on the body) and the pattern recognition problem simpler.

3. Interaction with devices and objects. A large category of activities involves interaction with objects and devices. Thus in an industrial maintenance task we have interaction with tools and pieces of machinery. In an Ambient Assisted Living scenario the interesting piece of information is the use of household appliances (cooker, coffee machine) and objects such as cutlery or pill boxes. Detecting such interaction works in a similar way independently of the scenario. Sensors range from wrist-worn RFID readers [Patterson05], through sound (which works especially well for appliances and electric tools such as a coffee grinder or drill), to body-worn cameras and sensors integrated in the objects.

4. Interaction with other humans. For many applications human interaction is an important aspect. Good examples are meeting support, recording and annotation systems, which are a much-researched subject [Kern03, Gatica07, Renals07]. In general, audio analysis of speaking patterns (user speaking, someone else speaking, ‘cocktail party effect’) is a good indication of different meeting situations.
User location (who is next to whom), video analysis and gestures (e.g. hand shake, pointing motions, gesticulation) can also provide useful information [Hung08, Zhang06].

5. Generalized location. User location is a crucial piece of information for most activity recognition tasks. In general the required information is not in the form of physical coordinates. Instead, semantically meaningful information such as being in a restaurant, in a certain part of a flat (e.g. at the kitchen table) or in proximity to other people or devices is needed. There is a multitude of ways to sense location [Hightower01]. They range from expensive and exact systems such as the UBISENSE Ultra-Wide-Band system, through simple beacon-based concepts (e.g. using Bluetooth or active RFID), to video analysis, auditory scene analysis (recognizing rooms by sound) and inertial navigation. Location information is likely to be the most variable part of activity recognition and in many cases very application-specific. It will be studied in much detail by OPPORTUNITY.

6. Background information. In addition to the explicit sensing described in the previous points, background information about the user and the environment is often useful for activity recognition. Such information can include user habits and the user's agenda.

7. Physiological parameters. For some applications the physiological and affective state (tired, stressed, health state) may be relevant. Other applications such as health monitoring are explicitly dedicated to physiological parameters. In the former case an opportunistic approach may be interesting, as we may for example try to infer the level of stress from varying information sources such as user motion patterns, voice, gesture or long-term behavior (deviation from routine). In the latter case it is safe to assume dedicated devices to be worn as part of the treatment regimen.
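To illustrate how complex activities can be modeled as combinations of the components above, the following is a minimal Python sketch. The rule table, component names and the combine() helper are hypothetical illustrations of the idea, not APIs or methods defined by the project.

```python
# Sketch: composing component-level classifications (location, mode of
# locomotion, hand activity) into a composite activity via a rule table.
# All names here are illustrative, not project-defined.

RULES = [
    # (locations, locomotion modes, hand activities) -> composite activity
    ({"office"}, {"sitting"}, {"typing"}, "desk work"),
    ({"kitchen"}, {"standing"}, {"manipulating_object"}, "preparing food"),
    ({"meeting_room"}, {"standing"}, {"gesticulating"}, "giving presentation"),
]

def combine(components):
    """Map a dict of component classifications to a composite activity."""
    loc = components.get("location")
    mot = components.get("locomotion")
    hand = components.get("hand_activity")
    for locs, mots, hands, activity in RULES:
        if loc in locs and mot in mots and hand in hands:
            return activity
    return "unknown"  # no rule matched the component combination

print(combine({"location": "office", "locomotion": "sitting",
               "hand_activity": "typing"}))  # -> desk work
```

The point of the decomposition is visible here: each component recognizer can be swapped or re-sourced (e.g. location from WLAN or from sound) without touching the composition layer.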
The above activity breakdown does not claim to be a universally valid, systematically researched taxonomy of human activity (a research challenge in its own right). Instead it is a heuristic breakdown based on the extensive experience of the involved partners with activity recognition. It is applicable to a broad range of activity recognition applications and problems, which is sufficient for the purpose of OPPORTUNITY.

B.1.3.1.3 The OPPORTUNITY Approach

ARCHITECTURE

Activity and context recognition is essentially a sense/classify problem. Based on a set of physical signals the system classifies the current situation as belonging to a certain context. The traditional approach to designing activity recognition systems leads to a system design that is static. The individual processing stages, sensors, and application goals are all tightly linked and, in general, changes in any one aspect require the entire system to be redesigned. OPPORTUNITY proposes a novel, dynamic, adaptive paradigm to remove the up-to-now static constraints placed on sensor availability, placement and characteristics. The new paradigm of OPPORTUNITY is illustrated in figure 1.3.1 and explained hereafter.

Figure 1.3.1: The approach of OPPORTUNITY to context/activity recognition. It includes ad-hoc cooperative sensing to obtain data about the user and surrounding world; a flexible and parameterizable opportunistic recognition chain; and runtime supervision and adaptation methods to opportunistically cope with changes.

Ad-hoc, cooperative sensing: A mobile device opportunistically exploits information from sensors placed in the user's outfit, in the environment and other sources of information to recognize contexts/activities. Sensors are opportunistically interconnected and enable large-scale data acquisition about the user and the world surrounding him.
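The ad-hoc interconnection of sensors can be sketched as a runtime registry that tracks which information sources are currently reachable. This is a minimal illustration under assumed semantics; the registry class, method names and sensor identifiers are hypothetical, not part of the OPPORTUNITY design.

```python
# Sketch: a runtime registry of opportunistically available sensors.
# Sensors join as they are discovered (in clothing, in a room) and leave
# on failure or departure; the recognition chain queries what is available.
# All names are illustrative assumptions.

class SensorRegistry:
    """Tracks which sensors are currently reachable and what they provide."""
    def __init__(self):
        self.sensors = {}  # name -> modality

    def announce(self, name, modality):
        """A newly discovered sensor joins the ad-hoc network."""
        self.sensors[name] = modality

    def withdraw(self, name):
        """A sensor leaves (failure, user leaves the room, outfit change)."""
        self.sensors.pop(name, None)

    def providing(self, modality):
        """All currently available sensors of a given modality."""
        return [n for n, m in self.sensors.items() if m == modality]

registry = SensorRegistry()
registry.announce("wrist_imu", "motion")
registry.announce("room_cam", "vision")
registry.announce("jacket_textile", "motion")

print(registry.providing("motion"))   # two motion sources available
registry.withdraw("wrist_imu")        # e.g. the user takes the watch off
print(registry.providing("motion"))   # chain falls back to the jacket sensor
```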
Opportunistic context recognition chain: The opportunistic context recognition chain is adjustable at all levels (signal pre-processing, feature extraction, classification, decision fusion, higher-level processing), in contrast to traditional approaches. The number and parameters of the sensors, features, or classifiers used can be dynamically adjusted according to the performance goal.

Adaptation: Finally, a key element of the opportunistic approach is dynamic adaptation and autonomous evolution on the basis of self-supervision and system/user feedback. Self-supervision and feedback, with corresponding adaptation strategies, make it possible to control the parameters of the opportunistic context recognition chain and adapt it to the sensor configuration at hand. This enables rapid dynamic adaptation to spontaneous changes in sensor configurations as well as long-term autonomous evolution in response to gradual changes in sensing environments and users.
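One of the adaptation strategies described above, dynamically re-weighting sensing modalities and leaving out degraded channels, can be sketched as follows. The reliability values, fusion rule and update heuristic are illustrative assumptions, not the project's defined methods.

```python
# Sketch: reliability-weighted decision fusion with a simple
# self-supervision update. Channels whose estimated reliability drops
# below a threshold are left out entirely (graceful degradation).
# All numbers and rules here are hypothetical illustrations.

def fuse(votes, reliability, min_rel=0.2):
    """Weighted vote over per-channel class scores.

    votes:       {channel: {label: score}}
    reliability: {channel: weight in [0, 1]}
    """
    totals = {}
    for ch, scores in votes.items():
        w = reliability.get(ch, 0.0)
        if w < min_rel:
            continue  # degraded channel is dropped from the fusion
        for label, s in scores.items():
            totals[label] = totals.get(label, 0.0) + w * s
    return max(totals, key=totals.get) if totals else None

def update_reliability(reliability, ch, agreed, rate=0.1):
    """Self-supervision: nudge a channel's weight toward its agreement
    with the fused decision (1.0 if it agreed, 0.0 if it did not)."""
    target = 1.0 if agreed else 0.0
    reliability[ch] += rate * (target - reliability[ch])

reliability = {"imu": 0.9, "sound": 0.8, "wlan": 0.1}  # wlan degraded
votes = {
    "imu":   {"walking": 0.7, "standing": 0.3},
    "sound": {"walking": 0.6, "standing": 0.4},
    "wlan":  {"standing": 1.0},  # ignored: reliability below min_rel
}
print(fuse(votes, reliability))  # -> walking
```

The same loop covers isolated failures (reliability falls to zero, the channel drops out) and recovery (a returning sensor is re-weighted upward as it agrees with the fused output).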
